
Be Your Own Prada: Fashion Synthesis with Structural Coherence


Abstract

We present a novel and effective approach for generating new clothing on a wearer through generative adversarial learning. Given an input image of a person and a sentence describing a different outfit, our model "redresses" the person as desired, while at the same time keeping the wearer and her/his pose unchanged. Generating new outfits with precise regions conforming to a language description while retaining the wearer's body structure is a new challenging task. Existing generative adversarial networks are not ideal in ensuring global coherence of structure given both the input photograph and language description as conditions. We address this challenge by decomposing the complex generative process into two conditional stages. In the first stage, we generate a plausible semantic segmentation map that obeys the wearer's pose as a latent spatial arrangement. An effective spatial constraint is formulated to guide the generation of this semantic segmentation map. In the second stage, a generative model with a newly proposed compositional mapping layer is used to render the final image with precise regions and textures conditioned on this map. We extended the DeepFashion dataset [8] by collecting sentence descriptions for 79K images. We demonstrate the effectiveness of our approach through both quantitative and qualitative evaluations. A user study is also conducted. The codes and the data are available at http://mmlab.ie.cuhk.edu.hk/projects/FashionGAN/.
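The two-stage decomposition described above can be sketched as a data-flow skeleton. This is a minimal illustration, not the paper's architecture: the dimensions, the random projections standing in for the learned generators, and the flat-colour "textures" are all placeholders chosen here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the paper's actual networks and sizes differ.
H, W = 128, 128      # output resolution
N_CLASSES = 7        # body-part / clothing segmentation classes
D_TEXT = 100         # sentence-embedding size

def stage1_shape_generator(pose_map, text_emb):
    """Stage 1 (sketch): predict a semantic segmentation map that obeys
    the wearer's pose, conditioned on the sentence embedding. A random
    projection stands in for the learned conditional generator."""
    logits = rng.standard_normal((N_CLASSES, H, W))
    logits += text_emb.mean()          # placeholder text conditioning
    logits += pose_map[None, :, :]     # bias layout toward the input pose
    return logits.argmax(axis=0)       # (H, W) integer class labels

def stage2_image_generator(seg_map, text_emb):
    """Stage 2 (sketch): render the final image region by region.
    A compositional mapping would paint each segmentation region with
    its own generated texture; here each class gets a flat colour."""
    palette = rng.uniform(size=(N_CLASSES, 3))
    return palette[seg_map]            # (H, W, 3) RGB image

pose = rng.standard_normal((H, W))     # stand-in for a pose/keypoint map
text = rng.standard_normal(D_TEXT)     # stand-in for the sentence encoding
seg = stage1_shape_generator(pose, text)
img = stage2_image_generator(seg, text)
```

The point of the split is that stage 2 never has to reason about the wearer's body structure directly: it only fills in textures inside regions that stage 1 has already arranged consistently with the pose.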
